Designing and deploying effective models for generating multiple versions of auto-marked questions
2024-12-02
or go to link.lizabolton.com
Presented at NZSA 2024 @ Te Herenga Waka Victoria University of Wellington
This project has received funding from the Faculty of Science Scholarship of Teaching and Learning Fund.
This study was approved by the University of Auckland Human Participants Ethics Committee (ref: UAHPEC27494).
This talk has three aims:
This part addresses the aim of iNZight considerations and opportunities, and has the benefit of introducing some of the ideas for aim 3.
Auto-marked questions tend to come in the following types:

How a student might get that answer:

- Recall (a year, a rule of thumb)
- Identify the number from text
- Perform a calculation
- Interact with data

:::notes
T or F might come with a bunch of distractors, and that is where historically a lot of question-writing work has gone.
:::
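To make the "generating multiple versions" idea concrete, here is a minimal sketch in R, with all names and numbers illustrative rather than taken from the actual project: each version simulates its own small dataset, and the correct answer and marking tolerance are computed programmatically from that data.

```r
# Sketch only: generate several versions of one auto-marked numeric question.
# The variable names, sample sizes, and tolerance are illustrative assumptions.
set.seed(2024)

make_version <- function(id) {
  x <- round(rnorm(30, mean = 170, sd = 8), 1)  # simulated heights (cm)
  list(
    id        = id,
    stem      = sprintf("Version %d: What is the mean height, to 1 dp?", id),
    data      = x,
    answer    = round(mean(x), 1),
    tolerance = 0.05  # accepted margin for auto-marking
  )
}

versions <- lapply(1:5, make_version)
```

Because the answer is derived from the generated data, every version marks itself consistently, and students interacting with the data (rather than recalling or copying) get a different dataset each attempt.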
Students use these quizzes for revision quite effectively.
Some students will brute-force it to try to get their 10/10 without fully understanding what they're supposed to be doing.
It makes it easier on us when writing portions of the tests and exams, as we can draw on question styles students are very familiar with but put them in a new data context.
Does this ruin your class average because they all get 10s? Nope.
Do you do this for the test and exam? Nope. The current tech-support drama of one testing platform is more than enough for high-stakes assessments, so we don't have students use iNZight in the test and exam.
📦 Working towards a package that would make it easier to write these question-generating models in R and then set up the components appropriately for Canvas, Inspera, HTML, etc.
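As an illustration of the direction only (none of these functions exist; every name here is a hypothetical placeholder, not the package's actual API), the intended workflow is one question model feeding several format-specific writers:

```r
# Hypothetical sketch: one question model, multiple output targets.
# Function names and arguments are placeholders, not an existing API.

question_model <- function(seed) {
  set.seed(seed)
  x <- sample(20:60, 25, replace = TRUE)   # simulated ages
  list(
    stem   = "What is the median age in this sample?",
    data   = x,
    answer = median(x)
  )
}

# A writer translates the same model into one platform's components
# (e.g. Canvas quiz items, Inspera question sets, or plain HTML for
# practice pages). Only an HTML writer is sketched here.
as_html <- function(q) {
  sprintf("<p>%s</p>\n<pre>%s</pre>",
          q$stem, paste(q$data, collapse = " "))
}

q <- question_model(seed = 1)
cat(as_html(q))
```

The design point is the separation: the model owns the randomisation and the answer key, while the writers only handle platform formatting, so adding a new platform should not require touching the question logic.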
👥 Understanding student interaction profiles
Liza to go look at the ethics
Looking at design and looking at what’s coming out of the use of tools, how do we
Slides: link.lizabolton.com